Google's operational flood forecasting system was developed to provide accurate real-time flood warnings to agencies and the public, with a focus on riverine floods in large, gauged rivers. It became operational in 2018 and has since expanded geographically. The forecasting system consists of four subsystems: data validation, stage forecasting, inundation modeling, and alert distribution. Machine learning is used for two of the subsystems. Stage forecasting is modeled with Long Short-Term Memory (LSTM) networks and Linear models. Flood inundation is computed with Thresholding and Manifold models, where the former computes inundation extent and the latter computes both inundation extent and depth. The Manifold model, presented here for the first time, provides a machine-learning alternative to hydraulic modeling of flood inundation. When evaluated on historical data, all models achieve sufficiently high performance metrics for operational use. The LSTM showed higher skill than the Linear model, while the Thresholding and Manifold models achieved similar performance metrics for modeling inundation extent. During the 2021 monsoon season, the flood warning system was operational in India and Bangladesh, covering flood-prone regions around rivers with a total area of 287,000 km², home to more than 350 million people. More than 100 million flood alerts were sent to affected populations, relevant authorities, and emergency organizations. Current and future work on the system includes extending coverage to additional flood-prone locations and improving modeling capabilities and accuracy.
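As a minimal sketch of the kind of LSTM stage-forecasting model the abstract describes: a window of past hydrological features maps to river stage at several future lead times. The feature set, layer sizes, and horizon below are illustrative assumptions, not the production system's configuration.

```python
import torch
import torch.nn as nn

class StageForecaster(nn.Module):
    """Minimal LSTM regressor: past hydrological features -> future river stage."""
    def __init__(self, n_features: int, hidden: int = 64, horizon: int = 24):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)  # one stage value per lead time

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features), e.g. recent rainfall and gauge readings
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])               # (batch, horizon)

model = StageForecaster(n_features=4)
past = torch.randn(8, 72, 4)  # 8 gauges, 72 past time steps, 4 input features
print(model(past).shape)      # torch.Size([8, 24])
```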
This work profoundly analyzes discrete self-supervised speech representations through the eyes of Generative Spoken Language Modeling (GSLM). Following the findings of such an analysis, we propose practical improvements to the discrete units used for GSLM. First, we set out to understand these units by analyzing them along three axes: interpretation, visualization, and resynthesis. Our analysis finds a high correlation between the speech units and phonemes and phoneme families, while their correlation with speaker or gender is weaker. Additionally, we find redundancies in the extracted units and argue that one cause may be the units' context. Following this analysis, we propose a new, unsupervised metric to measure unit redundancy. Finally, we use this metric to develop new methods that improve the robustness of unit clustering and show significant improvement on zero-resource speech metrics such as ABX. Code and analysis tools are available under the following link.
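The paper defines its own unsupervised redundancy metric; as a toy illustration of the general idea, one plausible proxy is how often adjacent frames map to the same discrete unit. The function below is illustrative, not the paper's metric.

```python
import numpy as np

def adjacent_repeat_rate(units: np.ndarray) -> float:
    """Fraction of frames whose unit equals the previous frame's unit.

    A toy redundancy proxy: high values mean many consecutive duplicate
    units, hinting that the codebook over-segments steady speech sounds.
    """
    return float(np.mean(units[1:] == units[:-1]))

units = np.array([5, 5, 5, 12, 12, 7, 7, 7, 7, 3])  # frame-level cluster ids
print(f"repeat rate: {adjacent_repeat_rate(units):.2f}")  # 0.67
```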
Prior works on improving speech quality with visual input typically study each type of auditory distortion separately (e.g., separation, inpainting, video-to-speech) and present tailored algorithms. This paper proposes to unify these subjects and study Generalized Speech Enhancement, where the goal is not to reconstruct the exact reference clean signal but to improve certain aspects of speech. In particular, this paper concerns intelligibility, quality, and video synchronization. We cast the problem as audio-visual speech resynthesis, which is composed of two steps: pseudo audio-visual speech recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and P-TTS are connected by discrete units derived from a self-supervised speech model. Moreover, we utilize a self-supervised audio-visual speech model to initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first high-quality model for in-the-wild video-to-speech synthesis and achieves superior performance on all LRS3 audio-visual enhancement tasks with a single model. To demonstrate its applicability in the real world, ReVISE is also evaluated on EasyCom, an audio-visual benchmark collected under challenging acoustic conditions, with only 1.6 hours of training data. Similarly, ReVISE greatly suppresses noise and improves quality. Project page: https://wnhsu.github.io/ReVISE.
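A dataflow sketch of the two-stage design described above may help; both modules below are hypothetical stand-ins (the real ReVISE uses AV-HuBERT-style encoders and a unit-based vocoder), and only illustrate how the stages connect through discrete units.

```python
import torch
import torch.nn as nn

class PseudoAVSR(nn.Module):
    """Maps noisy audio + lip video to a sequence of discrete unit ids."""
    def __init__(self, n_units=200, dim=32):
        super().__init__()
        self.audio = nn.Linear(80, dim)   # e.g. log-mel frames (assumed)
        self.video = nn.Linear(96, dim)   # e.g. flattened lip crops (assumed)
        self.head = nn.Linear(dim, n_units)

    def forward(self, a, v):
        return self.head(self.audio(a) + self.video(v)).argmax(-1)

class PseudoTTS(nn.Module):
    """Maps discrete units back to a (here: dummy) waveform."""
    def __init__(self, n_units=200, hop=160):
        super().__init__()
        self.emb = nn.Embedding(n_units, hop)

    def forward(self, units):
        return self.emb(units).reshape(units.shape[0], -1)

a, v = torch.randn(1, 50, 80), torch.randn(1, 50, 96)
units = PseudoAVSR()(a, v)   # (1, 50) discrete units bridge the two stages
wav = PseudoTTS()(units)     # (1, 8000) synthesized samples
print(units.shape, wav.shape)
```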
Dual encoders are now the dominant architecture for dense retrieval. Yet, we have little understanding of how they represent text, and why this leads to good performance. In this work, we shed light on this question via distributions over the vocabulary. We propose to interpret the vector representations produced by dual encoders by projecting them into the model's vocabulary space. We show that the resulting distributions over vocabulary tokens are intuitive and contain rich semantic information. We find that this view can explain some of the failure cases of dense retrievers. For example, the inability of models to handle tail entities can be explained via a tendency of the token distributions to forget some of the tokens of those entities. We leverage this insight and propose a simple way to enrich query and passage representations with lexical information at inference time, and show that this significantly improves performance compared to the original model in out-of-domain settings.
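As a hedged illustration of the projection (not the authors' exact code): push a dense vector through a masked-language-modeling head to obtain a distribution over vocabulary tokens, then read off the top tokens. Here bert-base-uncased stands in for the dual encoder's underlying model.

```python
import torch
from transformers import AutoTokenizer, BertForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")

vec = torch.randn(1, 768)  # stand-in for a query/passage [CLS] vector
with torch.no_grad():
    logits = mlm.cls(vec)  # MLM head: hidden state -> vocabulary logits
probs = logits.softmax(dim=-1)
top = probs.topk(5, dim=-1)
print(tok.convert_ids_to_tokens(top.indices[0].tolist()))
```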
Voice Conversion (VC) is the task of making a spoken utterance by one speaker sound as if uttered by a different speaker, while keeping other aspects, like content, unchanged. Current VC methods focus primarily on spectral features like timbre, while ignoring the unique speaking style of people, which often impacts prosody. In this study, we introduce a method for converting not only the timbre but also prosodic information (i.e., rhythm and pitch changes) to those of the target speaker. The proposed approach is based on a pretrained, self-supervised model for encoding speech into discrete units, which makes it simple, effective, and easy to optimise. We consider the many-to-many setting with no paired data. We introduce a suite of quantitative and qualitative evaluation metrics for this setup, and empirically demonstrate that the proposed approach is significantly superior to the evaluated baselines. Code and samples can be found under https://pages.cs.huji.ac.il/adiyoss-lab/dissc/ .
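A minimal sketch of the "speech to discrete units" front end this line of work builds on is shown below; the released pipeline, checkpoints, and k-means codebook differ, so treat it as illustrative only.

```python
import torch
from sklearn.cluster import KMeans
from transformers import HubertModel

hubert = HubertModel.from_pretrained("facebook/hubert-base-ls960")
wav = torch.randn(1, 16000)  # stand-in for 1 s of 16 kHz speech

with torch.no_grad():
    feats = hubert(wav).last_hidden_state[0]  # (frames, 768)

# Quantize frames into discrete units; real systems fit k-means once on a
# large corpus (e.g., k=100), not per utterance as done here.
units = KMeans(n_clusters=10, n_init=10).fit_predict(feats.numpy())

# Collapsing consecutive duplicates leaves a content-bearing unit stream;
# a separate decoder re-imposes the target speaker's timbre and prosody.
dedup = [int(u) for i, u in enumerate(units) if i == 0 or u != units[i - 1]]
print(dedup[:20])
```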
Indonesia has the second-highest number of malaria cases in Southeast Asia. Malaria parasite semantic segmentation based on deep learning is an alternative that can reduce the limitations of traditional methods. However, a key problem for semantic segmentation arises because large parasites dominate the predictions while tiny parasites are suppressed. In addition, the amount and variance of the available data strongly influence the resulting models. In this study, we make two contributions. First, we collect 559 microscopic images of thin blood smears containing 691 malaria parasites. The dataset is named PlasmoID, and most of the data comes from rural Indonesia. PlasmoID also provides ground truth for parasite detection and segmentation. Second, we propose a malaria parasite segmentation and detection scheme that combines Faster R-CNN with a semantic segmentation technique. The proposed scheme has been evaluated on the PlasmoID dataset and compared with recent semantic segmentation techniques, namely UNet, ResFCN-18, DeepLabV3, DeepLabV3plus, and ResUNet-18. The results show that our proposed scheme improves malaria parasite segmentation and detection performance compared to the original semantic segmentation techniques.
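A rough sketch of the detect-then-segment idea: a detector proposes parasite boxes so segmentation runs on crops where tiny parasites are no longer dwarfed by large ones. Off-the-shelf COCO weights are used purely for illustration; the paper trains its own models on PlasmoID.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.segmentation import deeplabv3_resnet50

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
segmenter = deeplabv3_resnet50(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)  # stand-in for a thin-blood-smear image
with torch.no_grad():
    boxes = detector([image])[0]["boxes"].round().int().tolist()
    for x0, y0, x1, y1 in boxes:
        if x1 - x0 < 2 or y1 - y0 < 2:
            continue  # skip degenerate boxes
        crop = image[:, y0:y1, x0:x1].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(128, 128))
        mask = segmenter(crop)["out"].argmax(1)  # per-pixel labels in crop
        print(mask.shape)
```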
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
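The released checkpoints load with standard Hugging Face tooling; shown here with the small bigscience/bloom-560m variant so the example runs on modest hardware (the full model is 176B parameters).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

prompt = "Translate to French: The cat sits on the mat."
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```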
Many scientific domains gather sufficient labels to train machine learning algorithms through human-in-the-loop techniques provided by the Zooniverse.org citizen science platform. As the range of projects, task types, and data rates increases, accelerating model training is of paramount concern so that volunteer effort can be focused where it is most needed. The application of Transfer Learning (TL) between Zooniverse projects holds promise as a solution. However, understanding the effectiveness of TL approaches that pretrain on large-scale generic image sets vs. images with similar characteristics, possibly from similar tasks, is an open challenge. We apply a generative segmentation model to two Zooniverse project-based data sets: (1) to identify fat droplets in liver cells (FatChecker; FC) and (2) to identify kelp beds in satellite images (Floating Forests; FF) through transfer learning from the first project. We compare and contrast its performance with a TL model based on the COCO image set, and subsequently with baseline counterparts. We find that both the FC and COCO TL models perform better than the baseline cases when using >75% of the original training sample size. The COCO-based TL model generally performs better than the FC-based one, likely due to its generalized features. Our investigation provides important insights into the use of TL approaches on multi-domain data hosted across different Zooniverse projects, enabling future projects to accelerate task completion.
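A generic sketch of the COCO-style transfer-learning recipe discussed above (the study uses its own generative segmentation model, not DeepLabV3): start from weights pretrained on a large generic image set, replace the task head for the new binary task, then fine-tune.

```python
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Load COCO/VOC-pretrained weights, then swap the final classifier layer
# for a 2-class head (e.g. kelp vs. background).
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(256, 2, kernel_size=1)

# Optionally freeze the pretrained backbone and fine-tune only the new head,
# a common choice when the target data set is small.
for p in model.backbone.parameters():
    p.requires_grad = False
```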
Hyperparameter optimization is the process of identifying a suitable hyperparameter configuration for a given machine learning model. For smaller data sets, an exhaustive search is possible; however, as data size and model complexity increase, the number of configuration evaluations becomes the main computational bottleneck. A promising paradigm for tackling such problems is surrogate-based optimization. The main idea underlying this paradigm is to maintain an incrementally updated model of the relation between the hyperparameter space and the output (target) space; the data for this model are obtained by evaluating the main learning engine, for example a factorization machine-based model. By learning to approximate the hyperparameter-target relation, the surrogate (machine learning) model can score large numbers of hyperparameter configurations, exploring parts of the configuration space beyond those reachable by direct evaluation of the machine learning engine. Commonly, a surrogate is selected before optimization begins and remains unchanged during the search. We investigate whether dynamically switching surrogates during the optimization itself is a sensible concept of practical relevance for selecting the most suitable factorization machine-based models for large-scale online recommendation. We conducted benchmarks on data sets containing hundreds of millions of instances against established baselines such as Random Forest- and Gaussian process-based surrogates. The results indicate that surrogate switching can offer good performance while requiring fewer learning-engine evaluations.
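A compact sketch of surrogate-based search with dynamic surrogate switching, using a toy 2-D objective in place of the real learning engine (which would be, e.g., a factorization machine evaluated on recommendation data) and a naive alternating schedule in place of the paper's switching strategy:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def engine(cfg):  # expensive "learning engine" stand-in
    return -((cfg[0] - 0.3) ** 2 + (cfg[1] - 0.7) ** 2)

surrogates = [RandomForestRegressor(n_estimators=50, random_state=0),
              GaussianProcessRegressor()]
X = rng.random((5, 2))                    # initial evaluated configurations
y = np.array([engine(c) for c in X])

for step in range(20):
    surrogate = surrogates[step % 2]      # switch surrogate each iteration
    surrogate.fit(X, y)                   # refit on all evaluations so far
    cand = rng.random((500, 2))           # cheap-to-score candidate configs
    best = cand[np.argmax(surrogate.predict(cand))]
    X = np.vstack([X, best])              # one real evaluation per step
    y = np.append(y, engine(best))

print("best config:", X[np.argmax(y)], "score:", y.max())
```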
In multi-modal action recognition, it is important to consider not only the complementary nature of different modalities but also the global action content. In this paper, we propose a novel network, named Modality Mixer (M-Mixer) network, which exploits complementary information across modalities and the temporal context of an action for multi-modal action recognition. We also introduce a simple yet effective recurrent unit, called Multi-modal Contextualization Unit (MCU), which is a core component of M-Mixer. Our MCU temporally encodes a sequence of one modality with the action content features of the other modalities (e.g., depth, IR). This process encourages M-Mixer to exploit global action content and to supplement the complementary information of the other modalities. As a result, our proposed method outperforms state-of-the-art methods on the NTU RGB+D 60, NTU RGB+D 120, and NW-UCLA datasets. Moreover, we demonstrate the effectiveness of M-Mixer by conducting comprehensive ablation studies.
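A hedged sketch of an MCU-style recurrent unit: a GRU over one modality's frame features whose hidden state is conditioned at each step on a pooled "action content" vector from the other modalities. Layer sizes and the exact conditioning are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class MCUSketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.gru = nn.GRUCell(dim, dim)
        self.mix = nn.Linear(2 * dim, dim)  # fuse hidden state with context

    def forward(self, rgb_seq, context):
        # rgb_seq: (T, B, dim) per-frame RGB features
        # context: (B, dim) pooled action content from depth/IR streams
        h = torch.zeros(rgb_seq.shape[1], rgb_seq.shape[2])
        for frame in rgb_seq:
            h = self.gru(frame, h)
            h = torch.tanh(self.mix(torch.cat([h, context], dim=-1)))
        return h  # action-content-aware sequence summary

mcu = MCUSketch()
out = mcu(torch.randn(16, 2, 256), torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 256])
```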